Inductive reasoning refers to a variety of methods of reasoning in which the conclusion of an argument is supported not with deductive certainty, but at best with some degree of probability. Unlike deductive reasoning (such as mathematical induction), where the conclusion is certain, given the premises are correct, inductive reasoning produces conclusions that are at best probable, given the evidence provided.
For example, suppose there are 20 balls in an urn, each either black or white. To estimate their respective numbers, a sample of four balls is drawn: three are black and one is white. An inductive generalization may be that there are 15 black and five white balls in the urn. However, this is only one of 17 possibilities for the actual numbers of each color in the urn (the population): there may, of course, have been 19 black balls and just one white, or only three black and 17 white, or any mix in between. The probability of each possible distribution being the actual one can be estimated using techniques such as Bayesian inference, where prior assumptions about the distribution are updated with the observed sample, or maximum likelihood estimation (MLE), which identifies the distribution most likely given the observed sample.
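The urn calculation can be made concrete. The following sketch (illustrative, not part of the original example) computes the exact posterior over the number of black balls under a uniform prior, using the hypergeometric likelihood appropriate for sampling without replacement, and reads off the maximum likelihood estimate:

```python
from math import comb

# Urn with N = 20 balls; a sample of n = 4 drawn without replacement
# contained k = 3 black balls. We infer B, the unknown number of black
# balls, assuming a uniform prior over B (an illustrative assumption).
N, n, k = 20, 4, 3

def likelihood(B):
    # P(k black in a sample of n | B black balls among N), hypergeometric
    if k > B or n - k > N - B:
        return 0.0
    return comb(B, k) * comb(N - B, n - k) / comb(N, n)

likelihoods = {B: likelihood(B) for B in range(N + 1)}
total = sum(likelihoods.values())
posterior = {B: L / total for B, L in likelihoods.items()}

# Maximum likelihood estimate of B
mle = max(likelihoods, key=likelihoods.get)
print(mle)                      # 15, matching the 3:1 ratio in the sample
print(round(posterior[15], 3))  # 0.112 -- most probable, far from certain
```

Note that even the most likely hypothesis (15 black) carries only about 11% of the posterior mass, which illustrates how weakly a sample of four constrains the population.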
How much the premises support the conclusion depends upon the size of the sample, the size of the population, and the degree to which the sample represents the population (which, for a static population, may be achieved by taking a random sample). The extent to which the sample represents the population also depends on the reliability of the procedure used for the individual observations; obtaining a genuinely random sample from even a static population is not always simple. The greater the sample size relative to the population, and the more closely the sample represents it, the stronger the generalization. The hasty generalization and the biased sample are generalization fallacies.
The measure is highly reliable within a well-defined margin of error provided that the selection process was genuinely random and that the numbers of items in the sample having the properties considered are large. It is readily quantifiable. Compare the preceding argument with the following. "Six of the ten people in my book club are Libertarians. Therefore, about 60% of people are Libertarians." The argument is weak because the sample is non-random and the sample size is very small.
Statistical generalizations are also called statistical projections (Schaum's Outlines, Logic, Second Edition, John Nolt, Dennis Rohatyn, Achille Varzi, McGraw-Hill, 1998, p. 223) and sample projections (Schaum's Outlines, Logic, p. 230).
This inference is less reliable (and thus more likely to commit the fallacy of hasty generalization) than a statistical generalization, first, because the sample events are non-random, and second because it is not reducible to a mathematical expression. Statistically speaking, there is simply no way to know, measure and calculate the circumstances affecting performance that will occur in the future. On a philosophical level, the argument relies on the presupposition that the operation of future events will mirror the past. In other words, it takes for granted a uniformity of nature, an unproven principle that cannot be derived from the empirical data itself. Arguments that tacitly presuppose this uniformity are sometimes called Humean after the philosopher who was first to subject them to philosophical scrutiny (Introduction to Logic, Gensler, p. 280).
For example:
This is a statistical syllogism (Introduction to Logic, Harry J. Gensler, Routledge, 2002, p. 268). Even though one cannot be sure Bob will attend university, we can be fully assured of the exact probability of this outcome (given no further information). Two dicto simpliciter fallacies can occur in statistical syllogisms: "accident" and "converse accident".
Analogical reasoning is very frequent in common sense, science, philosophy, law, and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning. (For more information on inferences by analogy, see Juthe, 2005.)
This is analogical induction, according to which things alike in certain ways are more likely to be alike in other ways. This form of induction was explored in detail by philosopher John Stuart Mill in his System of Logic, where he states, "there can be no doubt that every resemblance [not known to be irrelevant] affords some degree of probability, beyond what would otherwise exist, in favor of the conclusion" (A System of Logic, Mill 1843/1930, p. 333). See Mill's Methods.
Some thinkers contend that analogical induction is a subcategory of inductive generalization because it assumes a pre-established uniformity governing events. Analogical induction requires an auxiliary examination of the relevancy of the characteristics cited as common to the pair. In the preceding example, if a premise were added stating that both stones were mentioned in the records of early Spanish explorers, this common attribute is extraneous to the stones and does not contribute to their probable affinity.
A pitfall of analogy is that features can be cherry-picked: while two objects may show striking similarities, they may also possess other characteristics, not identified in the analogy, that are sharply dissimilar. Thus, analogy can mislead if not all relevant comparisons are made.
The most basic form of enumerative induction reasons from particular instances to all instances and is thus an unrestricted generalization. If one observes 100 swans, and all 100 were white, one might infer a probable universal categorical proposition of the form All swans are white. As this argument form's premises, even if true, do not entail the conclusion's truth, this is a form of inductive inference. The conclusion might be true, and might be thought probably true, yet it can be false. Questions regarding the justification and form of enumerative inductions have been central in philosophy of science, as enumerative induction has a pivotal role in the traditional model of the scientific method.
This is enumerative induction, also known as simple induction or simple predictive induction. It is a subcategory of inductive generalization. In everyday practice, this is perhaps the most common form of induction. For the preceding argument, the conclusion is tempting but makes a prediction well in excess of the evidence. First, it assumes that life forms observed until now can tell us how future cases will be: an appeal to uniformity. Second, the conclusion "All" is a bold assertion. A single contrary instance foils the argument. And last, quantifying the level of probability in any mathematical form is problematic (Schaum's Outlines, Logic, pp. 243–35). By what standard do we measure our Earthly sample of known life against all (possible) life? Suppose we do discover some new organism—such as a microorganism floating in the mesosphere, or on an asteroid—and it is cellular. Does the addition of this corroborating evidence oblige us to raise our probability assessment for the subject proposition? It is generally deemed reasonable to answer this question "yes", and for a good many this "yes" is not only reasonable but incontrovertible. So then just how much should this new data change our probability assessment? Here, consensus melts away, and in its place arises a question about whether we can talk of probability coherently at all, with or without numerical quantification.
This is enumerative induction in its weak form. It truncates "all" to a mere single instance and, by making a far weaker claim, considerably strengthens the probability of its conclusion. Otherwise, it has the same shortcomings as the strong form: its sample population is non-random, and quantification methods are elusive.
There are three ways of attacking an argument; these ways, known as defeaters in the defeasible reasoning literature, are rebutting, undermining, and undercutting. Rebutting defeats by offering a counter-example, undermining defeats by questioning the validity of the evidence, and undercutting defeats by pointing out conditions under which the conclusion is not true even when the inference holds. This approach builds confidence by identifying defeaters and proving them wrong.
This type of induction may use different methodologies such as quasi-experimentation, which tests and, where possible, eliminates rival hypotheses.
Eliminative induction is crucial to the scientific method and is used to eliminate hypotheses that are inconsistent with observations and experiments. It focuses on possible causes instead of observed actual instances of causal connections.
The Dogmatic school of ancient Greek medicine employed analogismos as a method of inference (Galen, On Medical Experience, 24). This method used analogy to reason from what was observed to unobservable forces.
Since Hume first wrote about the dilemma between the invalidity of deductive arguments and the circularity of inductive arguments in support of the uniformity of nature, this supposed dichotomy between merely two modes of inference, deduction and induction, has been contested. A third mode of inference, known as abduction or abductive reasoning, was first formulated and advanced by Charles Sanders Peirce in 1886, where he referred to it as "reasoning by hypothesis." Inference to the best explanation is often, yet arguably, treated as synonymous with abduction; it was first identified by Gilbert Harman in 1965, where he referred to it as "abductive reasoning," yet his definition of abduction slightly differs from Peirce's. Regardless, if abduction is in fact a third mode of inference rationally independent from the other two, then either the uniformity of nature can be rationally justified through abduction, or Hume's dilemma is more of a trilemma. Hume was also skeptical of the application of enumerative induction and reason to reach certainty about unobservables, and especially of the inference of causality from the fact that modifying an aspect of a relationship prevents or produces a particular outcome.
Reasoning that the mind must contain its own categories for organizing sense data, making experience of objects in space and time (phenomena) possible, Kant concluded that the uniformity of nature was an a priori truth. A class of synthetic statements that was not contingent but true by necessity was then synthetic a priori. Kant thus saved both metaphysics and Newton's law of universal gravitation. On the basis of the argument that what goes beyond our knowledge is "nothing to us," he discarded scientific realism. Kant's position that knowledge comes about by a cooperation of perception and our capacity to think (transcendental idealism) gave birth to the movement of German idealism. Hegel's absolute idealism subsequently flourished across continental Europe and England.
According to Comte, scientific method frames predictions, confirms them, and states laws—positive statements—irrefutable by theology or by metaphysics. Regarding experience as justifying enumerative induction by demonstrating the uniformity of nature (Wesley C. Salmon, "The Uniformity of Nature", Philosophy and Phenomenological Research, 1953 Sep; 14(1):39–48, at 39), the British philosopher John Stuart Mill welcomed Comte's positivism, but thought scientific laws susceptible to recall or revision, and withheld his support from Comte's Religion of Humanity. Comte was confident in treating scientific law as an irrefutable foundation of knowledge, and believed that churches, honouring eminent scientists, ought to focus public mindset on altruism—a term Comte coined—to apply science for humankind's social welfare via sociology, Comte's leading science.
During the 1830s and 1840s, while Comte and Mill were the leading philosophers of science, William Whewell found enumerative induction not nearly as convincing, and, despite the dominance of inductivism, formulated "superinduction" (Roberto Torretti, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), 219–21). Whewell argued that "the peculiar import of the term Induction" should be recognised: "there is some Conception superinduced upon the facts", that is, "the Invention of a new Conception in every inductive inference". The creation of Conceptions is easily overlooked and prior to Whewell was rarely recognised. Whewell explained:
These "superinduced" explanations may well be flawed, but their accuracy is suggested when they exhibit what Whewell termed consilience—that is, simultaneously predicting the inductive generalizations in multiple areas—a feat that, according to Whewell, can establish their truth. Perhaps to accommodate the prevailing view of science as inductivist method, Whewell devoted several chapters to "methods of induction" and sometimes used the phrase "logic of induction", despite the fact that induction lacks rules and cannot be trained.
In the 1870s, the originator of pragmatism, C. S. Peirce, performed vast investigations that clarified the basis of deductive inference as mathematical proof (as, independently, did Gottlob Frege). Peirce recognized induction but always insisted on a third type of inference that he variously termed abduction or retroduction or hypothesis or presumption (Roberto Torretti, The Philosophy of Physics (Cambridge: Cambridge University Press, 1999), pp. 226, 228–29). Later philosophers termed Peirce's abduction, etc., Inference to the Best Explanation (IBE).
The futility of attaining certainty through some critical mass of probability can be illustrated with a coin-toss exercise. Suppose someone tests whether a coin is either a fair one or two-headed. They flip the coin ten times, and ten times it comes up heads. At this point, there is a strong reason to believe it is two-headed. After all, the chance of ten heads in a row is .000976: less than one in one thousand. Then, after 100 flips, every toss has come up heads. Now there is "virtual" certainty that the coin is two-headed, and one can regard it as "true" that the coin is probably two-headed. Still, one can neither logically nor empirically rule out that the next toss will produce tails. No matter how many times in a row it comes up heads, this remains the case. If one programmed a machine to flip a coin over and over continuously, at some point the result would be a string of 100 heads. In the fullness of time, all combinations will appear.
As for the slim prospect of getting ten out of ten heads from a fair coin—the outcome that made the coin appear biased—many may be surprised to learn that the chance of any sequence of heads or tails is equally unlikely (e.g., H-H-T-T-H-T-H-H-H-T) and yet it occurs in every trial of ten tosses. That means all results for ten tosses have the same probability as getting ten out of ten heads, which is 0.000976. If one records the heads-tails sequences, for whatever result, that exact sequence had a chance of 0.000976.
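The arithmetic behind these claims is easy to verify; the short sketch below (illustrative, using the sequence quoted in the text) computes the probability of ten heads in a row and confirms that any specific ten-toss sequence is equally likely:

```python
# Probability of ten heads in a row from a fair coin: (1/2)^10
p_ten_heads = 0.5 ** 10
print(p_ten_heads)  # 0.0009765625, i.e. about .000976, less than 1 in 1000

# Any particular sequence, e.g. the H-H-T-T-H-T-H-H-H-T from the text,
# is exactly as improbable as ten straight heads.
sequence = "HHTTHTHHHT"
p_sequence = 0.5 ** len(sequence)
print(p_sequence == p_ten_heads)  # True
```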
An argument is deductive when the conclusion is necessary given the premises; that is, the conclusion must be true if the premises are true. For example, after getting ten heads in a row one might deduce that the coin had met some statistical criterion to be regarded as probably two-headed, a conclusion that would not be falsified even if the next toss yielded tails.
If a deductive conclusion follows duly from its premises, then it is valid; otherwise, it is invalid (that an argument is invalid is not to say its conclusions are false; it may have a true conclusion, just not on account of the premises). An examination of the following examples will show that the relationship between premises and conclusion is such that the truth of the conclusion is already implicit in the premises. Bachelors are unmarried because we say they are; we have defined them so. Socrates is mortal because we have included him in a set of beings that are mortal. The conclusion for a valid deductive argument is already contained in the premises since its truth is strictly a matter of logical relations. It cannot say more than its premises. Inductive premises, on the other hand, draw their substance from fact and evidence, and the conclusion accordingly makes a factual claim or prediction. Its reliability varies proportionally with the evidence. Induction wants to reveal something new about the world. One could say that induction wants to say more than is contained in the premises.
To better see the difference between inductive and deductive arguments, consider that it would not make sense to say: "all rectangles so far examined have four right angles, so the next one I see will have four right angles." This would treat logical relations as something factual and discoverable, and thus variable and uncertain. Likewise, speaking deductively, we may permissibly say: "All unicorns can fly; I have a unicorn named Charlie; thus Charlie can fly." This deductive argument is valid because the logical relations hold; we are not interested in their factual soundness.
The conclusions of inductive reasoning are inherently uncertain. Inductive reasoning deals only with the extent to which, given the premises, the conclusion is "credible" according to some theory of evidence. Examples include a many-valued logic, Dempster–Shafer theory, or probability theory with rules for inference such as Bayes' rule. Unlike deductive reasoning, it does not rely on universals holding over a closed domain of discourse to draw conclusions, so it can be applicable even in cases of epistemic uncertainty (technical issues may arise, however; for example, the second axiom of probability is a closed-world assumption).
Another crucial difference between these two types of argument is that deductive certainty is impossible in non-axiomatic or empirical systems such as reality, leaving inductive reasoning as the primary route to (probabilistic) knowledge of such systems.
Given that "if A is true then that would cause B, C, and D to be true", an example of deduction would be "A is true, therefore we can deduce that B, C, and D are true". An example of induction would be "B, C, and D are observed to be true, therefore A might be true". A is a causal explanation for B, C, and D being true.
For example:
Note, however, that the asteroid explanation for the mass extinction is not necessarily correct. Other events with the potential to affect global climate also coincide with the extinction of the non-avian dinosaurs. For example, the release of volcanic gases (particularly sulfur dioxide) during the formation of the Deccan Traps in India.
Another example of an inductive argument:
This argument could have been made every time a new biological life form was found, and would have had a correct conclusion every time; however, it is still possible that in the future a biological life form not requiring liquid water could be discovered. As a result, the argument may be stated as:
A classical example of an "incorrect" statistical syllogism was presented by John Vickers:
The conclusion fails because the population of swans then known was not actually representative of all swans. A more reasonable conclusion would be: we might reasonably expect all swans in England to be white, at least in the short term.
Succinctly put: deduction is about certainty/necessity; induction is about probability. Any single assertion will answer to one of these two criteria. Another approach to the analysis of reasoning is that of modal logic, which deals with the distinction between the necessary and the possible in a way not concerned with probabilities among things deemed possible.
The philosophical definition of inductive reasoning is more nuanced than a simple progression from particular/individual instances to broader generalizations. Rather, the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it. In this manner, there is the possibility of moving from general statements to individual instances (for example, statistical syllogisms).
Note that the definition of inductive reasoning described here differs from mathematical induction, which, in fact, is a form of deductive reasoning. Mathematical induction is used to provide strict proofs of the properties of recursively defined sets. The deductive nature of mathematical induction derives from its basis in a non-finite number of cases, in contrast with the finite number of cases involved in an enumerative induction procedure like proof by exhaustion. Both mathematical induction and proof by exhaustion are examples of complete induction. Complete induction is a masked type of deductive reasoning.
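The deductive character of mathematical induction can be shown schematically. The schema below is a standard formulation (added here for illustration): given the base case and the inductive step as premises, the universal conclusion follows with certainty, not mere probability.

```latex
% Mathematical induction as a deductive schema: from the base case P(0)
% and the inductive step, the universal conclusion follows necessarily.
\[
  \bigl( P(0) \;\wedge\; \forall k\,\bigl(P(k) \rightarrow P(k+1)\bigr) \bigr)
  \;\Rightarrow\; \forall n\, P(n)
\]
% Example: let P(n) be the claim 1 + 2 + \cdots + n = \tfrac{n(n+1)}{2}.
% P(0) holds (both sides are 0), and P(k) \rightarrow P(k+1) holds because
% adding (k+1) to both sides preserves the equality, so P(n) holds for all n.
```

Unlike enumerative induction over observed instances, no further case could overturn the conclusion once the two premises are established; this is what makes complete induction deductive.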
Hume nevertheless stated that even if induction were proved unreliable, we would still have to rely on it. So instead of a position of severe skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted.Vickers, John. "The Problem of Induction" (Section 2.1). Stanford Encyclopedia of Philosophy. 21 June 2010. Bertrand Russell illustrated Hume's skepticism in a story about a chicken who, fed every morning without fail and following the laws of induction, concluded that this feeding would always continue, until his throat was eventually cut by the farmer.
In 1963, Karl Popper wrote, "Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure."Donald Gillies, "Problem-solving and the problem of induction", in Rethinking Popper (Dordrecht: Springer, 2009), Zuzana Parusniková & Robert S Cohen, eds, pp. 103–05. Popper's 1972 book Objective Knowledge—whose first chapter is devoted to the problem of induction—opens, "I think I have solved a major philosophical problem: the problem of induction". In Popper's schema, enumerative induction is "a kind of optical illusion" cast by the steps of conjecture and refutation during a problem shift. An imaginative leap, the tentative solution is improvised, lacking inductive rules to guide it. The resulting, unrestricted generalization is deductive, an entailed consequence of all explanatory considerations. Controversy continued, however, with Popper's putative solution not generally accepted.Ch 5 "The controversy around inductive logic" in Richard Mattessich, ed, Instrumental Reasoning and Systems Methodology: An Epistemology of the Applied and Social Sciences (Dordrecht: D. Reidel Publishing, 1978), pp. 141–43 .
Donald A. Gillies argues that rules of inferences related to inductive reasoning are overwhelmingly absent from science, and describes most scientific inferences as "involving conjectures thought up by human ingenuity and creativity, and by no means inferred in any mechanical fashion, or according to precisely specified rules." Gillies also provides a rare counterexample "in the machine learning programs of AI."Donald Gillies, "Problem-solving and the problem of induction", in Rethinking Popper (Dordrecht: Springer, 2009), Zuzana Parusniková & Robert S Cohen, eds, p. 111 : "I argued earlier that there are some exceptions to Popper's claim that rules of inductive inference do not exist. However, these exceptions are relatively rare. They occur, for example, in the machine learning programs of AI. For the vast bulk of human science both past and present, rules of inductive inference do not exist. For such science, Popper's model of conjectures which are freely invented and then tested out seems to be more accurate than any model based on inductive inferences. Admittedly, there is talk nowadays in the context of science carried out by humans of 'inference to the best explanation' or 'abductive inference', but such so-called inferences are not at all inferences based on precisely formulated rules like the deductive rules of inference. Those who talk of 'inference to the best explanation' or 'abductive inference', for example, never formulate any precise rules according to which these so-called inferences take place. In reality, the 'inferences' which they describe in their examples involve conjectures thought up by human ingenuity and creativity, and by no means inferred in any mechanical fashion, or according to precisely specified rules".
The availability heuristic is regarded as causing the reasoner to depend primarily upon information that is readily available. People have a tendency to rely on information that is easily accessible in the world around them. For example, in surveys, when people are asked to estimate the percentage of people who died from various causes, most respondents choose the causes that have been most prevalent in the media such as terrorism, murders, and airplane accidents, rather than causes such as disease and traffic accidents, which have been technically "less accessible" to the individual since they are not emphasized as heavily in the world around them.
Confirmation bias is based on the natural tendency to confirm rather than deny a hypothesis. Research has demonstrated that people are inclined to seek solutions to problems that are more consistent with known hypotheses rather than attempt to refute those hypotheses. Often, in experiments, subjects will ask questions that seek answers that fit established hypotheses, thus confirming these hypotheses. For example, if it is hypothesized that Sally is a sociable individual, subjects will naturally seek to confirm the premise by asking questions that would produce answers confirming that Sally is, in fact, a sociable individual.
The predictable-world bias revolves around the inclination to perceive order where it has not been proved to exist, either at all or at a particular level of abstraction. Gambling, for example, is one of the most popular examples of predictable-world bias. Gamblers often begin to think that they see simple and obvious patterns in the outcomes and therefore believe that they are able to predict outcomes based on what they have witnessed. In reality, however, the outcomes of these games are difficult to predict and highly complex in nature. In general, people tend to seek some type of simplistic order to explain or justify their beliefs and experiences, and it is often difficult for them to realise that their perceptions of order may be entirely different from the truth.
Inductive inference typically considers hypothesis classes with a countable size. A recent advance established a sufficient and necessary condition for inductive inference: a finite error bound is guaranteed if and only if the hypothesis class is a countable union of online learnable classes. Notably, this condition allows the hypothesis class to have an uncountable size while remaining learnable within this framework.